research center



A Survey on Large Language Models for Communication, Network, and Service Management: Application Insights, Challenges, and Future Directions

Boateng, Gordon Owusu, Sami, Hani, Alagha, Ahmed, Elmekki, Hanae, Hammoud, Ahmad, Mizouni, Rabeb, Mourad, Azzam, Otrok, Hadi, Bentahar, Jamal, Muhaidat, Sami, Talhi, Chamseddine, Dziong, Zbigniew, Guizani, Mohsen

arXiv.org Artificial Intelligence

The rapid evolution of communication networks in recent decades has intensified the need for advanced Network and Service Management (NSM) strategies to address the growing demands for efficiency, scalability, enhanced performance, and reliability of these networks. Large Language Models (LLMs) have received tremendous attention due to their unparalleled capabilities in various Natural Language Processing (NLP) tasks and in generating context-aware insights, offering transformative potential for automating diverse communication NSM tasks. In contrast to existing surveys that consider a single network domain, this survey investigates the integration of LLMs across different communication network domains, including mobile networks and related technologies, vehicular networks, cloud-based networks, and fog/edge-based networks. First, the survey provides foundational knowledge of LLMs, explicitly detailing the generic transformer architecture, general-purpose and domain-specific LLMs, LLM pre-training and fine-tuning, and their relation to communication NSM. Under a novel taxonomy of network monitoring and reporting, AI-powered network planning, network deployment and distribution, and continuous network support, we extensively categorize LLM applications for NSM tasks in each of the different network domains, exploring the existing literature and its contributions thus far. We then identify existing challenges and open issues, as well as future research directions for LLM-driven communication NSM, emphasizing the need for scalable, adaptable, and resource-efficient solutions that align with the dynamic landscape of communication networks. We envision this survey serving as a holistic roadmap, providing critical insights for leveraging LLMs to enhance NSM.
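To make the pre-training/fine-tuning workflow the survey covers more concrete, below is a minimal Python sketch of adapting a general-purpose pretrained transformer to a single NSM task (classifying network log lines by fault category). The base checkpoint, label set, and example log messages are illustrative assumptions, not taken from the survey.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical fault categories for an NSM log-classification task.
LABELS = ["normal", "congestion", "hardware_fault"]

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=len(LABELS)
)  # the classification head is freshly initialized; fine-tune on labeled logs before real use

logs = ["eth0: TX queue full, packets dropped", "link up, 10 Gbps negotiated"]
batch = tokenizer(logs, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits
for line, idx in zip(logs, logits.argmax(dim=-1).tolist()):
    print(f"{LABELS[idx]:>15}  <-  {line}")

A domain-specific LLM for NSM would swap in a different base checkpoint and training corpus; the surrounding tokenize/fine-tune/infer workflow stays the same.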


What fifty-one years of Linguistics and Artificial Intelligence research tell us about their correlation: A scientometric review

Shormani, Mohammed Q.

arXiv.org Artificial Intelligence

There is a strong correlation between linguistics and artificial intelligence (AI), best manifested by deep learning language models. This study provides a thorough scientometric analysis of this correlation, synthesizing the intellectual production of 51 years, from 1974 to 2024. It covers 5750 Web of Science-indexed articles published in 2124 journals, written by 20835 authors affiliated with 13773 research centers in 794 countries. Two software tools, viz., CiteSpace and VOSviewer, were used to generate mapping visualizations of the intellectual landscape, trending issues, and (re)emerging hotspots. The results indicate that in the 1980s and 1990s, linguistics and AI research was not robust, characterized by unstable publication output over time. Since then, however, it has witnessed a remarkable increase in publications, reaching 1478 articles in 2023 and 546 articles in the January-March 2024 timespan alone, involving emerging issues and hotspots, addressing new horizons and new topics, and launching new applications and powerful deep learning language models, including ChatGPT.


PILA: A Historical-Linguistic Dataset of Proto-Italic and Latin

Bothwell, Stephen, DuSell, Brian, Chiang, David, Krostenko, Brian

arXiv.org Artificial Intelligence

Computational historical linguistics seeks to systematically understand processes of sound change, including during periods at which little to no formal recording of language is attested. At the same time, few computational resources exist which deeply explore phonological and morphological connections between proto-languages and their descendants. This is particularly true for the family of Italic languages. To assist historical linguists in the study of Italic sound change, we introduce the Proto-Italic to Latin (PILA) dataset, which consists of roughly 3,000 pairs of forms from Proto-Italic and Latin. We provide a detailed description of how our dataset was created and organized. Then, we exhibit PILA's value in two ways. First, we present baseline results for PILA on a pair of traditional computational historical linguistics tasks. Second, we demonstrate PILA's capability for enhancing other historical-linguistic datasets through a dataset compatibility study.
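As a hypothetical illustration of what such a proto-form/reflex pair dataset looks like in code, the Python sketch below represents pairs and scores a trivial "copy the proto-form" baseline with Levenshtein edit distance. The field names and example pairs are assumptions for illustration, not PILA's actual release format.

from dataclasses import dataclass

@dataclass
class Pair:
    proto: str   # Proto-Italic form (asterisk marks a reconstruction)
    latin: str   # attested Latin reflex

def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

pairs = [Pair("*deiwos", "deus"), Pair("*ekwos", "equus")]  # illustrative pairs only
# Naive baseline: "predict" the Latin reflex as the proto-form minus its asterisk.
for p in pairs:
    guess = p.proto.lstrip("*")
    print(p.proto, "->", guess, "| distance to gold:", edit_distance(guess, p.latin))

A learned reflex-prediction model would replace the naive copy baseline, but the same per-pair edit-distance evaluation applies.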


Russia refurbishes outdated tanks to replace 3,000 lost in Ukraine, research center says

FOX News

Russia has lost more than 3,000 tanks in Ukraine - the equivalent of its entire pre-war active inventory - but has enough lower-quality armored vehicles in storage for years of replacements, a leading research center said on Tuesday. Ukraine has also suffered heavy losses since Russia invaded in February 2022, but Western military replenishments have allowed it to maintain inventories while upgrading quality, the International Institute for Strategic Studies said. Even after the loss of so many tanks - including an estimated 1,120 in the past year - Russia still has about twice as many available for combat as Ukraine, according to the IISS's annual Military Balance, a key research tool for defense analysts. Henry Boyd, the institute's senior fellow for military capability, said Russia had been roughly "breaking even" in terms of replacements.


AI for everybody: GOP, Dems unite behind public AI research center to 'democratize' the tech

FOX News

Republicans and Democrats in the Artificial Intelligence Caucus are proposing the creation of a public research center that would give people and organizations access to the tools they need to create their own AI systems, even if they don't have access to billions of dollars in research funding. Lawmakers proposed the "Creating Resources for Every American To Experiment with Artificial Intelligence Act," or the CREATE AI Act, a bill that would establish the National Artificial Intelligence Research Resource (NAIRR). In January, a federal task force called for the creation of this body and estimated it would need about $440 million per year to get off the ground. The CREATE AI Act doesn't authorize that specific level of funding, but the bill signals that both parties are interested in establishing the NAIRR to ensure that billion- and trillion-dollar AI developers aren't the only ones building this new technology.


South Korea's Nationwide Effort for AI Semiconductor Industry

Communications of the ACM

Samsung Electronics and SK Hynix, the world's leading memory semiconductor companies, have launched investment and employment plans for their AI semiconductor and foundry businesses. Samsung Electronics is trying to develop next-generation AI semiconductor products by leveraging its strengths in mobile chipset design and memory manufacturing. Samsung develops its own neural processing units (NPUs) and integrates them into multiple processing platforms, including the Exynos mobile processor and the Exynos auto processor series. Major companies that once focused secretly and solely on their own product development are starting to open up relations with academia and research communities to learn the latest AI technologies and engage with well-educated researchers. Another notable product-development direction is putting AI computation logic into memories.


Consultant for AI applications (f/m/x) at Helmholtz Zentrum München - Neuherberg near Munich

#artificialintelligence

Helmholtz Munich is a research center with the mission to discover personalized medical solutions for environmentally triggered diseases and to promote a healthier society in a rapidly changing world. Germany's largest research organization, the Helmholtz Association, launched Helmholtz AI, a dedicated interdisciplinary platform that develops and promotes applied Artificial Intelligence (AI) methods for the Association's main research fields (Health, Energy, Earth and Environment, Information, Space, Matter) in collaboration with its external and university partners. Its central unit operates at Helmholtz Munich. Our mission is to enable research scientists to leverage AI methods optimally. The Munich consultant team focuses on Health and collaborates with researchers from across the Association on projects in cancer and infection research, molecular medicine, and neurodegenerative and environmental diseases.


U.S.-Backed Researchers Use AI to Probe for Weaknesses in Drug Supply Chains

WSJ.com: WSJD - Technology

Any overreliance on foreign inputs in drug supply chains could leave the U.S. open to dire shortages in the event of conflict or natural catastrophe. The White House has flagged the potential disruption of the pharmaceutical supply chain as a national-security issue, saying these drugs are essential to the health and prosperity of the country. "The ever-changing threat environment, both natural and man-made, gives rise to numerous unforeseen challenges, such as to the pharmaceutical supply chain," said Jennifer Foley, a deputy director in DHS's science and technology directorate. Quantifind Inc., a company that normally does risk screening for financial institutions, will do the work, looking into supply chains for the Cross-Border Threat Screening and Supply Chain Defense Center of Excellence, a government-backed research center connected with Texas A&M University.


How Québec became a world-class AI powerhouse

#artificialintelligence

The use of artificial intelligence (AI) is exploding across the planet as it evolves into an essential tool for a myriad of fields and industries. But to successfully implement the technology in business operations, IT leaders need to surround themselves with global experts while building a state-of-the-art ecosystem. The place to start looking for such experts in Canada is Québec, the nation's AI powerhouse. Seventh in the world – that's where the province ranks in the Global AI Index published by the British firm Tortoise Media, a ranking of the most competitive countries in AI. Canada overall comes in fourth place – a remarkable achievement of which Québec is a real driving force.